Open World Object Detection (OWOD) is a new and challenging computer vision task that bridges the gap between classic object detection (OD) benchmarks and object detection in the real world. In addition to detecting and classifying seen/labeled objects, OWOD algorithms are expected to detect novel/unknown objects - which can be classified and incrementally learned. In standard OD, object proposals not overlapping with a labeled object are automatically classified as background. Therefore, simply applying OD methods to OWOD fails as unknown objects would be predicted as background. The challenge of detecting unknown objects stems from the lack of supervision in distinguishing unknown objects and background object proposals. Previous OWOD methods have attempted to overcome this issue by generating supervision using pseudo-labeling - however, unknown object detection has remained low. Probabilistic/generative models may provide a solution for this challenge. Herein, we introduce a novel probabilistic framework for objectness estimation, where we alternate between probability distribution estimation and objectness likelihood maximization of known objects in the embedded feature space - ultimately allowing us to estimate the objectness probability of different proposals. The resulting Probabilistic Objectness transformer-based open-world detector, PROB, integrates our framework into traditional object detection models, adapting them for the open-world setting. Comprehensive experiments on OWOD benchmarks show that PROB outperforms all existing OWOD methods in both unknown object detection ($\sim 2\times$ unknown recall) and known object detection ($\sim 10\%$ mAP). Our code will be made available upon publication at https://github.com/orrzohar/PROB.
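The alternation the abstract describes can be illustrated with a minimal sketch: fit a Gaussian to the embeddings of known-object queries, then score any proposal by its (unnormalized) likelihood under that distribution. This is only an illustration of the idea, not PROB's actual implementation; the function names and the toy embeddings are invented here.

```python
import numpy as np

def fit_objectness_distribution(embeddings):
    """Fit a multivariate Gaussian to embeddings of known-object queries."""
    mean = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    return mean, np.linalg.inv(cov)

def objectness_score(query, mean, cov_inv):
    """Score a proposal embedding: smaller Mahalanobis distance -> more object-like."""
    diff = query - mean
    m2 = float(diff @ cov_inv @ diff)  # squared Mahalanobis distance
    return np.exp(-0.5 * m2)           # unnormalized Gaussian likelihood

rng = np.random.default_rng(0)
known = rng.normal(loc=1.0, scale=0.2, size=(500, 8))    # stand-in "object" embeddings
mean, cov_inv = fit_objectness_distribution(known)

near = objectness_score(np.full(8, 1.0), mean, cov_inv)  # close to the object cluster
far = objectness_score(np.full(8, 5.0), mean, cov_inv)   # far from it ("background")
```

A proposal near the known-object cluster scores close to 1, while one far away scores near 0 — exactly the supervision-free separation between objects and background the method is after.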
Neural Representations have recently been shown to effectively reconstruct a wide range of signals from 3D meshes and shapes to images and videos. We show that, when adapted correctly, neural representations can be used to directly represent the weights of a pre-trained convolutional neural network, resulting in a Neural Representation for Neural Networks (NeRN). Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network based on its position in the architecture, and optimize a predictor network to map coordinates to their corresponding weights. Similarly to the spatial smoothness of visual scenes, we show that incorporating a smoothness constraint over the original network's weights aids NeRN towards a better reconstruction. In addition, since slight perturbations in pre-trained model weights can result in a considerable accuracy loss, we employ techniques from the field of knowledge distillation to stabilize the learning process. We demonstrate the effectiveness of NeRN in reconstructing widely used architectures on CIFAR-10, CIFAR-100, and ImageNet. Finally, we present two applications using NeRN, demonstrating the capabilities of the learned representations.
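The coordinate-to-weight mapping can be sketched with a toy stand-in: assign each kernel a (layer, filter, channel) coordinate, encode the coordinates with random Fourier features, and fit a linear predictor to the kernel weights by least squares. NeRN itself trains a neural predictor with distillation and smoothness losses; everything here (sizes, encoding, linear fit) is an illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" network: 4 layers x 6 filters x 3 channels of 3x3 kernels.
L, F, C, K = 4, 6, 3, 9
kernels = rng.normal(size=(L * F * C, K))

# Assign each kernel a coordinate (layer, filter, channel), as NeRN does.
coords = np.array(
    [(l, f, c) for l in range(L) for f in range(F) for c in range(C)], float
)

# Random Fourier features of the coordinates (a common positional encoding).
B = rng.normal(size=(3, 32))
feats = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)

# Fit a linear predictor feats -> kernel weights by least squares
# (a stand-in for NeRN's trained predictor network).
W, *_ = np.linalg.lstsq(feats, kernels, rcond=None)
recon = feats @ W
err = float(np.mean((recon - kernels) ** 2))
```

The reconstruction error is well below the variance of the weights themselves, showing that a coordinate-conditioned predictor can compress the weight tensor into a shared function.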
In meta-reinforcement learning (meta-RL), an agent learns from a set of training tasks how to quickly solve a new task drawn from the same task distribution. The optimal meta-RL policy, also known as Bayes-optimal behavior, is well defined and guarantees optimal expected reward with respect to the task distribution. The question we explore in this work is how many training tasks are required to guarantee approximately optimal behavior with high probability. Recent work provides the first such PAC analysis for the model-free setting, where history-dependent policies are learned from the training tasks. In this work, we propose a different approach: directly learning the task distribution using density-estimation techniques, and then training a policy on the learned task distribution. We show that our approach leads to bounds that depend on the dimension of the task distribution. In particular, in settings where the task distribution lies on a low-dimensional manifold, we extend our analysis to use dimensionality-reduction techniques and account for this structure, obtaining bounds that are significantly better than previous work, which depends strictly on the number of states and actions. The key to our approach is the regularization implied by kernel density estimation methods. We further demonstrate that this regularization is useful in practice when "plugged into" the state-of-the-art VariBAD meta-RL algorithm.
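The core step — learn the task distribution with kernel density estimation, then train on tasks sampled from it — can be sketched in a few lines. The task parameterization and bandwidth below are invented for illustration; the point is that sampling from a Gaussian KDE is equivalent to resampling a training task and adding smoothing noise, which is the regularization the abstract highlights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed training tasks, each described by a scalar parameter
# (e.g., a goal position) -- a hypothetical parameterization.
train_tasks = rng.normal(loc=2.0, scale=0.5, size=40)

def kde_sample(data, bandwidth, n, rng):
    """Sample from a Gaussian kernel density estimate of `data`.

    Equivalent to: pick a data point at random, then add N(0, bandwidth^2)
    noise -- the smoothing acts as the regularization discussed above."""
    centers = rng.choice(data, size=n, replace=True)
    return centers + rng.normal(scale=bandwidth, size=n)

new_tasks = kde_sample(train_tasks, bandwidth=0.2, n=1000, rng=rng)
```

Training then proceeds on `new_tasks` instead of cycling only over the finite training set, so the policy never overfits to the exact observed task parameters.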
Foundation Models (FMs) are models trained on large corpora of data that, at very large scale, can generalize to new tasks without any task-specific finetuning. As these models continue to grow in size, innovations continue to push the boundaries of what these models can do on language and image tasks. This paper aims to understand an underexplored area of FMs: classical data tasks like cleaning and integration. As a proof-of-concept, we cast five data cleaning and integration tasks as prompting tasks and evaluate the performance of FMs on these tasks. We find that large FMs generalize and achieve SoTA performance on data cleaning and integration tasks, even though they are not trained for these data tasks. We identify specific research challenges and opportunities that these models present, including challenges with private and domain specific data, and opportunities to make data management systems more accessible to non-experts. We make our code and experiments publicly available at: https://github.com/HazyResearch/fm_data_tasks.
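Casting a data-integration task as a prompting task amounts to serializing records into natural-language questions. A minimal sketch for entity matching is below; the exact prompt wording and the example records are invented here, not taken from the paper's templates.

```python
def entity_matching_prompt(a: dict, b: dict) -> str:
    """Serialize two records into a yes/no question a foundation model can
    answer -- illustrative of casting entity matching as prompting."""
    def render(rec):
        return "; ".join(f"{k}: {v}" for k, v in rec.items())
    return (
        "Product A is [" + render(a) + "]. "
        "Product B is [" + render(b) + "]. "
        "Are Product A and Product B the same? Answer yes or no."
    )

prompt = entity_matching_prompt(
    {"title": "Apple iPhone 13 128GB", "price": "699"},
    {"title": "iPhone 13 (128 GB)", "price": "699.00"},
)
```

The resulting string is sent to the model as-is; no task-specific fine-tuning is involved, which is precisely the zero-/few-shot generalization the abstract evaluates.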
Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the FlexiBiT framework, which provides a unified way to specify models which can be trained on many different sequential decision making tasks. We show that a single FlexiBiT model is simultaneously capable of carrying out many tasks with performance similar to or better than specialized models. Additionally, we show that performance can be further improved by fine-tuning our general model on specific tasks of interest.
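The "tasks as maskings" correspondence can be made concrete with a small mask builder over state and action tokens. The three schemes below are an illustrative subset with simplified masks, not FlexiBiT's exact specifications.

```python
import numpy as np

def make_mask(T, scheme):
    """Boolean visibility mask over a length-T trajectory.

    Row 0 = state tokens, row 1 = action tokens; True = visible to the
    model, False = masked out and predicted."""
    states, actions = np.ones(T, bool), np.ones(T, bool)
    if scheme == "behavior_cloning":     # predict the actions given the states
        actions[:] = False
    elif scheme == "forward_dynamics":   # predict later states from s_0 and actions
        states[1:] = False
    elif scheme == "waypoint":           # only endpoints of the state sequence
        states[1:-1] = False             # are visible; actions are predicted
        actions[:] = False
    else:
        raise ValueError(scheme)
    return np.stack([states, actions])
```

A single model trained on randomly drawn maskings like these can then be queried with any one scheme at inference time, which is what lets one network cover many decision-making tasks.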
In classical graph signal processing, given a real-valued graph signal, its graph Fourier transform is typically defined as the inner product between the signal and each eigenvector of the graph Laplacian. Unfortunately, this definition is not mathematically valid in the case of vector-valued graph signals, which are nevertheless typical operands in state-of-the-art graph learning models and analyses. Seeking a generalized transform that decomposes vector-valued signals is therefore the main objective of this paper. Several attempts are explored, and it is also found that performing the transform at hierarchical levels of adjacency helps reveal the spectral properties of the signal more easily. The proposed methods are introduced as new tools to assist in diagnosing and analyzing the behavior of graph learning models.
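The classical definition the abstract starts from can be written down directly: eigendecompose the Laplacian and project the signal onto the eigenvectors. Applying it column-wise to a vector-valued signal, as below, is the naive per-channel extension the paper argues is insufficient on its own; the 4-node path graph is a toy example.

```python
import numpy as np

# Path graph on 4 nodes: adjacency and combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
Lap = np.diag(A.sum(1)) - A
evals, U = np.linalg.eigh(Lap)   # eigenvectors form the graph Fourier basis

def gft(X):
    """Classical GFT applied column-wise to an n x d vector-valued signal
    (one d-dimensional vector per node)."""
    return U.T @ X

X = np.array([[1., 0.],
              [2., 1.],
              [3., 0.],
              [4., 1.]])         # 2-dimensional signal on 4 nodes
Xhat = gft(X)
Xrec = U @ Xhat                  # inverse transform: the basis is orthonormal
```

Because the eigenbasis is orthonormal, the transform is perfectly invertible — but it treats each channel independently, ignoring cross-channel structure, which motivates the generalized transforms the paper develops.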
Deep neural networks (DNNs) are powerful tools for compressing and distilling information. Due to their scale and complexity, often involving billions of interacting internal degrees of freedom, exact analytical approaches typically fall short. A common strategy in such cases is to identify slow degrees of freedom that average over the erratic behavior of the underlying fast microscopic variables. Here, we identify a separation of scales occurring in over-parameterized deep convolutional neural networks (CNNs) at the end of training. It implies that neuron pre-activations fluctuate in a nearly Gaussian manner around a deterministic latent kernel. For CNNs with infinitely many channels these kernels are inert, while for finite CNNs they adapt to and learn from the data in an analytically tractable manner. The resulting thermodynamic theory of deep learning yields accurate predictions for several deep nonlinear CNN toy models. In addition, it provides new ways of analyzing and understanding CNNs.
With the increasing adoption of machine learning (ML) models and systems in high-stakes settings across different industries, guaranteeing a model's performance after deployment has become crucial. Monitoring models in production is a critical aspect of ensuring their continued performance and reliability. We present Amazon SageMaker Model Monitor, a fully managed service that continuously monitors the quality of machine learning models hosted on Amazon SageMaker. Our system automatically detects data, concept, bias, and feature-attribution drift in models in real time and provides alerts so that model owners can take corrective actions and thereby maintain high-quality models. We describe the key requirements gathered from customers, the system design and architecture, and the methodologies used for detecting different types of drift. Further, we provide a quantitative evaluation, followed by use cases, insights, and lessons learned from more than 1.5 years of production deployment.
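One common building block of data-drift detection is a two-sample statistic comparing a training-time baseline against live traffic. The sketch below uses the Kolmogorov-Smirnov statistic with a hand-picked threshold; it illustrates the general idea only, and is not the service's actual detector or threshold.

```python
import numpy as np

def ks_statistic(baseline, live):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs, evaluated on the pooled sample."""
    grid = np.sort(np.concatenate([baseline, live]))
    cdf_b = np.searchsorted(np.sort(baseline), grid, side="right") / len(baseline)
    cdf_l = np.searchsorted(np.sort(live), grid, side="right") / len(live)
    return float(np.max(np.abs(cdf_b - cdf_l)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 2000)   # feature values captured at training time
no_drift = rng.normal(0.0, 1.0, 2000)   # production data, same distribution
drifted = rng.normal(1.5, 1.0, 2000)    # production data after a shift

alert = ks_statistic(baseline, drifted) > 0.1  # threshold would be tuned in practice
```

In a monitoring pipeline, a statistic like this is computed per feature on each batch of captured requests, and crossing the threshold raises the alert that lets model owners act.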
Many potential application areas for robots in real-world environments hinge on a robot's ability to grasp objects. As a result, robotic grasping has been an active field of research for many years. With our publication we contribute to enabling robots to grasp, with a particular focus on bin-picking applications. Bin picking is especially challenging due to the often cluttered and unstructured arrangement of objects, as well as the frequent unreachability of objects by simple top-down grasps. To address these challenges, we propose a fully self-supervised reinforcement learning approach based on a hybrid discrete-continuous adaptation of Soft Actor-Critic (SAC). We employ parameterized motion primitives for pushing and grasping movements in order to enable flexibly adaptable behaviors for the difficult setups we consider. Furthermore, we use data augmentation to increase sample efficiency. We demonstrate our proposed method on challenging picking scenarios in which planar grasp learning or action-discretization methods would face great difficulties.
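The hybrid discrete-continuous action space can be pictured as a discrete primitive choice paired with continuous parameters. The fields, ranges, and sampler below are invented for illustration and are not the paper's exact action parameterization.

```python
import random
from dataclasses import dataclass

@dataclass
class PrimitiveAction:
    """Hybrid discrete-continuous action: a discrete primitive choice plus
    continuous parameters, in the spirit of parameterized push/grasp
    motion primitives (fields and ranges here are hypothetical)."""
    primitive: str   # "grasp" or "push"
    x: float         # target position in the bin (meters)
    y: float
    angle: float     # end-effector rotation (radians)

def sample_action(rng):
    """Stand-in for the policy's output during random exploration."""
    return PrimitiveAction(
        primitive=rng.choice(["grasp", "push"]),
        x=rng.uniform(-0.2, 0.2),
        y=rng.uniform(-0.2, 0.2),
        angle=rng.uniform(-3.14159, 3.14159),
    )

a = sample_action(random.Random(0))
```

A hybrid SAC variant then maintains a categorical head for the primitive choice and Gaussian heads for the continuous parameters, rather than discretizing the whole action space.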
Consider a dataset of structured features, such as $\{\textrm{sex}, \textrm{income}, \textrm{race}, \textrm{experience}\}$. A user may want to know where in the feature space observations are concentrated, and where the space is sparse or empty. The presence of large sparse or empty regions can provide domain knowledge of soft or hard feature constraints (e.g., what the typical income range is, or that one is unlikely to have a high income with only a few years of work experience). Moreover, these can suggest to the user that machine learning (ML) model predictions for inputs in sparse or empty regions may be unreliable. An interpretable region is a hyper-rectangle, such as $\{\textrm{race} \in \{\textrm{black}, \textrm{white}\}\}\ \&\ \{10 \leq \textrm{experience} \leq 13\}$, containing all observations satisfying the constraints; typically, such regions are defined by a small number of features. Our method constructs an observation-density-based partition of the feature space observed in the dataset. It has several advantages over others in that it works on features of mixed type (numeric or categorical) in the original domain, and can also separate out empty regions. As visualizations show, the resulting partitions agree with spatial groupings that the human eye might identify; the results should therefore extend to higher dimensions. We also show some applications of the partitions to other data analysis tasks, such as inferring ML model error, measuring high-dimensional density variability, and causal inference of treatment effects. Many of these applications are made possible by the hyper-rectangular form of the partition regions.
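The hyper-rectangular form makes region membership trivially checkable: set constraints for categorical features, interval constraints for numeric ones. The sketch below (toy records, invented helper names) shows the membership test and the density of a region like the one in the abstract.

```python
def in_region(row, region):
    """Check whether an observation satisfies a hyper-rectangle region:
    categorical features map to allowed-value sets, numeric features to
    (low, high) bounds; features not mentioned are unconstrained."""
    for feat, constraint in region.items():
        value = row[feat]
        if isinstance(constraint, set):
            if value not in constraint:
                return False
        else:
            low, high = constraint
            if not (low <= value <= high):
                return False
    return True

data = [
    {"race": "black", "experience": 11, "income": 60},
    {"race": "white", "experience": 12, "income": 75},
    {"race": "white", "experience": 30, "income": 150},
]
region = {"race": {"black", "white"}, "experience": (10, 13)}
inside = [row for row in data if in_region(row, region)]
density = len(inside) / len(data)
```

Because every region is an axis-aligned conjunction over a few features, it can be read off directly as a rule, which is what makes the downstream applications (model-error auditing, density comparison) interpretable.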